
    Trivial Transfer Learning for Low-Resource Neural Machine Translation

    Transfer learning has been proven an effective technique for neural machine translation under low-resource conditions. Existing methods require a common target language, language relatedness, or specific training tricks and regimes. We present a simple transfer learning method, where we first train a "parent" model for a high-resource language pair and then continue the training on a low-resource pair only by replacing the training corpus. This "child" model performs significantly better than the baseline trained for the low-resource pair only. We are the first to show this for targeting different languages, and we observe the improvements even for unrelated languages with different alphabets. Comment: Accepted as a WMT18 research paper, Proceedings of the 3rd Conference on Machine Translation 2018
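    The recipe is simple enough to sketch in a few lines; all names below (NMTModel, load_corpus, train_steps) are hypothetical stand-ins rather than the authors' implementation, and the point is only the control flow: the child run continues from the parent's weights, with just the corpus replaced.

```python
# Minimal sketch of the "trivial transfer" recipe, with hypothetical
# placeholder names; real training would use an NMT toolkit.

class NMTModel:
    """Stand-in for any seq2seq model."""
    def __init__(self):
        self.weights = {}  # parent weights carry over into child training

def load_corpus(pair):
    """Hypothetical loader returning (source, target) sentence pairs."""
    return [("hello", "ahoj")]  # toy data for illustration only

def train_steps(model, corpus, steps):
    """Placeholder training loop; it resumes from the model's current weights."""
    for _ in range(steps):
        pass  # forward pass, backprop, and weight update would go here

model = NMTModel()
train_steps(model, load_corpus("en-cs"), steps=1000)  # parent: high-resource pair
# Child: same model object, training simply continues on the new corpus.
train_steps(model, load_corpus("en-et"), steps=1000)  # low-resource pair
```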

    Analyzing Error Types in English-Czech Machine Translation

    This paper examines two techniques of manual evaluation that can be used to identify error types of individual machine translation systems. The first technique, “blind post-editing”, has been used in WMT evaluation campaigns since 2009, and manually constructed data of this type are available for various language pairs. The second technique, explicit marking of errors, has been used in the past as well. We propose a method for interpreting blind post-editing data at a finer level and compare the results with explicit marking of errors. While the human annotation in either technique is not exactly reproducible (inter-annotator agreement is relatively low), both techniques lead to similar observations about the differences between the systems. Specifically, we are able to suggest which errors in MT output are easy or hard to correct without access to the source, a situation experienced by users who do not understand the source language.
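    One common way to quantify the inter-annotator agreement mentioned above is Cohen's kappa; the paper itself may use a different measure, so the sketch below is only an illustrative computation on toy error-type labels.

```python
# Cohen's kappa on toy labels from two annotators; illustrates the agreement
# statistic only, not the paper's exact annotation setup.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each annotator's label distribution
    expected = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["lex", "word-order", "lex", "morph", "lex"]
ann2 = ["lex", "lex",        "lex", "morph", "word-order"]
print(cohens_kappa(ann1, ann2))  # ~0.29, i.e. relatively low agreement
```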

    Are BLEU and Meaning Representation in Opposition?

    One possible way of obtaining continuous-space sentence representations is to train neural machine translation (NMT) systems. The recent attention mechanism, however, removes the single point in the neural network from which the source sentence representation can be extracted. We propose several variations of the attentive NMT architecture that bring this meeting point back. Empirical evaluation suggests that the better the translation quality, the worse the learned sentence representations serve in a wide range of classification and similarity tasks. Comment: ACL 2018; 10 pages + 2-page supplementary
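    The "meeting point" is a single place in the network from which one fixed-size vector per sentence can be read. As a generic illustration of that interface (not one of the paper's actual architecture variants), mean-pooling the encoder states yields such a vector:

```python
# Generic stand-in for a sentence-representation extraction point: mean-pool
# per-token encoder states into one fixed-size vector. The paper's actual
# variants differ; this only shows the intended interface.
import numpy as np

def sentence_vector(encoder_states: np.ndarray) -> np.ndarray:
    """encoder_states: (seq_len, hidden_dim) per-token states."""
    return encoder_states.mean(axis=0)  # one vector per sentence

states = np.random.randn(7, 512)  # toy: 7 tokens, 512-dim hidden states
vec = sentence_vector(states)     # features for classification/similarity tasks
print(vec.shape)                  # (512,)
```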

    Giving a Sense: A Pilot Study in Concept Annotation from Multiple Resources

    We present a pilot study of a web-based annotation of words with senses. The annotated senses come from several knowledge bases and sense inventories. The study is the first step in a planned larger annotation of grounding and should allow us to select a subset of the sense sources that cover any given text reasonably well and show an acceptable level of inter-annotator agreement.

    The Design of Eman, an Experiment Manager

    We present eman, a tool for managing large numbers of computational experiments. Over the years of our research in machine translation (MT), we have collected a number of ideas for efficient experimenting. We believe these ideas are generally applicable in (computational) research of any field, and we have incorporated them into eman to make them available in a command-line Unix environment. The aim of this article is to highlight the core of these ideas. We hope the text can serve as a collection of experiment-management tips and tricks for anyone, regardless of their field of study or the computer platform they use. The specific examples we provide in eman’s current syntax are less important, but they allow us to use concrete terms. The article thus also fills a gap in eman’s documentation by providing a high-level overview.

    Particle Swarm Optimization Submission for WMT16 Tuning Task

    This paper describes our submission to the Tuning Task of WMT16. We replace the grid search implemented as part of standard minimum-error rate training (MERT) in the Moses toolkit with a search based on particle swarm optimization (PSO). An older variant of PSO has previously been applied successfully, and we now test it in optimizing the Tuning Task model for English-to-Czech translation. We also adapt the method in some aspects to allow for even easier parallelization of the search.
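    A minimal, self-contained PSO sketch is below; a toy quadratic stands in for the actual fitness, which in the submission would be a translation quality score of the Moses model under a given feature-weight vector. Evaluating all particles of a generation independently is what makes the search easy to parallelize.

```python
# Minimal particle swarm optimization over weight vectors (maximizing fitness).
import numpy as np

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))  # positions = weight vectors
    v = np.zeros_like(x)                        # velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()]             # best position seen so far
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])   # particles are independent:
        improved = f > pbest_f                  # easy to evaluate in parallel
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()]
    return gbest, pbest_f.max()

best, score = pso(lambda wvec: -np.sum((wvec - 0.3) ** 2), dim=5)
print(best, score)  # converges near [0.3, 0.3, 0.3, 0.3, 0.3]
```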

    Results of the WMT13 Metrics Shared Task

    This paper presents the results of the WMT13 Metrics Shared Task. We asked participants of this task to score the outputs of the MT systems involved in the WMT13 Shared Translation Task. We collected scores of 16 metrics from 8 research groups. In addition, we computed scores of 5 standard metrics, such as BLEU, WER, and PER, as baselines. The collected scores were evaluated in terms of system-level correlation (how well each metric’s scores correlate with the WMT13 official human scores) and in terms of segment-level correlation (how often a metric agrees with humans in comparing two translations of a particular sentence).
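    The two evaluation views can be sketched on toy numbers (all values below are invented for illustration, not task results): Pearson correlation of per-system scores for the system level, and the fraction of human pairwise preferences the metric reproduces for the segment level.

```python
# Toy illustration of the two correlation views described above.
import numpy as np

metric_sys = np.array([0.21, 0.35, 0.30, 0.18])  # one metric, four systems
human_sys  = np.array([0.50, 0.80, 0.70, 0.40])  # official human scores
print(np.corrcoef(metric_sys, human_sys)[0, 1])  # system-level correlation

seg_metric = np.array([0.6, 0.4, 0.7])  # metric scores of 3 translations of one sentence
human_prefs = [(2, 0), (0, 1), (2, 1)]  # human judgments: (better, worse)
agree = sum(seg_metric[b] > seg_metric[w] for b, w in human_prefs)
print(agree / len(human_prefs))          # fraction of concordant pairs
```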